56 research outputs found

    Asymptotic Weight Enumerators of Randomly Punctured, Expurgated, and Shortened Code Ensembles

    In this paper, we examine the effect of random puncturing, expurgation, and shortening on the asymptotic weight enumerator of certain linear code ensembles. We begin by discussing the actions of the three alteration methods on individual codes. We derive expressions for the average weight enumerator of the resulting code under each alteration. We then extend these results to the spectral shape of linear code ensembles whose original spectral shape is known, and demonstrate our findings on two specific code ensembles: the Shannon ensemble and the regular (j, k) Gallager ensemble.
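    For reference, the spectral shape of an ensemble is conventionally the normalized logarithmic growth rate of its average weight enumerator. The display below uses standard textbook notation (our addition, not quoted from the paper), where \bar{A}_w denotes the ensemble-average number of codewords of Hamming weight w in a length-n code:

        r(\delta) = \lim_{n \to \infty} \frac{1}{n} \ln \bar{A}_{\lfloor \delta n \rfloor},
        \qquad 0 \le \delta \le 1 .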

    Secure multi-party protocols under a modern lens

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 2013. Includes bibliographical references (p. 263-272).
    A secure multi-party computation (MPC) protocol for computing a function f allows a group of parties to jointly evaluate f over their private inputs, such that a computationally bounded adversary who corrupts a subset of the parties cannot learn anything beyond the inputs of the corrupted parties and the output of the function f. General MPC completeness theorems in the 1980s showed that every efficiently computable function can be evaluated securely in this fashion [Yao86, GMW87, CCD87, BGW88], assuming the existence of cryptographic primitives. In the following decades, progress has been made toward making MPC protocols efficient enough to be deployed in real-world applications. However, recent technological developments have brought with them a slew of new challenges, from new security threats to the question of whether protocols can scale with the demands of distributed computation on massive data. Before one can make effective use of MPC, these challenges must be addressed. In this thesis, we focus on two lines of research toward this goal:
    * Protocols resilient to side-channel attacks. We consider a strengthened adversarial model where, in addition to corrupting a subset of parties, the adversary may leak partial information on the secret states of honest parties during the protocol. In the presence of such an adversary, we first focus on preserving the correctness guarantees of MPC computations. We then proceed to address security guarantees, using cryptography. We provide two results: an MPC protocol whose security provably "degrades gracefully" with the amount of leakage obtained by the adversary, and a second protocol which provides complete security assuming a (necessary) one-time preprocessing phase during which leakage cannot occur.
    * Protocols with scalable communication requirements. We devise MPC protocols with communication locality: namely, each party only needs to communicate with a small (polylog) number of dynamically chosen parties. Our techniques use digital signatures and extend particularly well to the case when the function f is a sublinear algorithm whose execution depends on o(n) of the n parties' inputs.
    by Elette Chantae Boyle. Ph.D.

    Breaking the O(√n)-Bit Barrier: Byzantine Agreement with Polylog Bits Per Party

    Byzantine agreement (BA), the task of n parties to agree on one of their input bits in the face of malicious agents, is a powerful primitive that lies at the core of a vast range of distributed protocols. Interestingly, in protocols with the best overall communication, the demands on the parties are highly unbalanced: the amortized cost is Õ(1) bits per party, but some parties must send Ω(n) bits. In the best known balanced protocols, the overall communication is sub-optimal, with each party communicating Õ(√n) bits. In this work, we ask whether asymmetry is inherent for optimizing total communication. Our contributions in this line are as follows:
    1) We define a cryptographic primitive, succinctly reconstructed distributed signatures (SRDS), that suffices for constructing Õ(1) balanced BA. We provide two constructions of SRDS from different cryptographic and Public-Key Infrastructure (PKI) assumptions.
    2) The SRDS-based BA follows a paradigm of boosting from "almost-everywhere" agreement to full agreement, and does so in a single round. We prove that PKI setup and cryptographic assumptions are necessary for such protocols in which every party sends o(n) messages.
    3) We further explore connections between a natural approach toward attaining SRDS and average-case succinct non-interactive argument systems (SNARGs) for a particular type of NP-complete problems (generalizing Subset-Sum and Subset-Product).
    Our results provide new approaches forward, as well as limitations and barriers, toward minimizing the per-party communication of BA. In particular, we construct the first two BA protocols with Õ(1) balanced communication, offering a tradeoff between setup and cryptographic assumptions, and answering an open question presented by King and Saia (DISC '09).
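    To make the asymmetry concrete, here is the standard accounting implied by the abstract (our own arithmetic, not a quote from the paper): the unbalanced protocols achieve near-optimal total cost with a heavy worst-case party, while balanced protocols pay more in total.

        \text{unbalanced:}\quad \sum_i \mathrm{cost}_i = n \cdot \tilde O(1) = \tilde O(n),
        \qquad \max_i \mathrm{cost}_i = \Omega(n);
        \text{balanced:}\quad \mathrm{cost}_i = \tilde O(\sqrt n)\ \forall i,
        \qquad \sum_i \mathrm{cost}_i = \tilde O(n^{3/2}).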

    Adversarially Robust Property-Preserving Hash Functions

    Property-preserving hashing is a method of compressing a large input x into a short hash h(x) in such a way that given h(x) and h(y), one can compute a property P(x, y) of the original inputs. The idea of property-preserving hash functions underlies sketching, compressed sensing and locality-sensitive hashing. Property-preserving hash functions are usually probabilistic: they use the random choice of a hash function from a family to achieve compression, and as a consequence, err on some inputs. Traditionally, the notion of correctness for these hash functions requires that for every two inputs x and y, the probability that h(x) and h(y) mislead us into a wrong prediction of P(x, y) is negligible. As observed in many recent works (incl. Mironov, Naor and Segev, STOC 2008; Hardt and Woodruff, STOC 2013; Naor and Yogev, CRYPTO 2015), such a correctness guarantee assumes that the adversary (who produces the offending inputs) has no information about the hash function, and is too weak in many scenarios. We initiate the study of adversarial robustness for property-preserving hash functions, provide definitions, derive broad lower bounds due to a simple connection with communication complexity, and show the necessity of computational assumptions to construct such functions. Our main positive results are two candidate constructions of property-preserving hash functions (achieving different parameters) for the (promise) gap-Hamming property, which checks if x and y are "too far" or "too close". Our first construction relies on generic collision-resistant hash functions, and our second on a variant of the syndrome decoding assumption on low-density parity-check codes.
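    To ground the probabilistic (non-robust) notion the paper starts from, here is a toy coordinate-sampling sketch for estimating Hamming distance. All names are ours, and this is emphatically not the paper's construction (which relies on collision-resistant hashing or syndrome decoding); it illustrates exactly the weakness the paper addresses:

        import random

        def sample_hash(n: int, k: int, seed: int):
            """Toy property-preserving hash for gap-Hamming: keep k random
            coordinates of an n-bit input. NOT adversarially robust: anyone
            who learns the seed can place all disagreements outside (or
            inside) the sampled coordinates and force a wrong answer."""
            coords = random.Random(seed).sample(range(n), k)
            return lambda x: tuple(x[i] for i in coords)

        def estimate_distance(hx, hy, n: int, k: int) -> float:
            """Unbiased estimate of Hamming(x, y): the sampled
            disagreement count, scaled back up to n coordinates."""
            disagreements = sum(a != b for a, b in zip(hx, hy))
            return disagreements * n / k

        n, k = 1000, 100
        h = sample_hash(n, k, seed=42)
        x = [random.randint(0, 1) for _ in range(n)]
        y = x[:]                                # flip 200 coordinates:
        for i in random.sample(range(n), 200):  # true distance is 200
            y[i] ^= 1
        print(estimate_distance(h(x), h(y), n, k))  # concentrates near 200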

    Fourier-based Function Secret Sharing with General Access Structure

    A function secret sharing (FSS) scheme is a mechanism for computing a function f(x), for x in {0,1}^n, that has been shared among p parties via distributed functions f_i: {0,1}^n -> G, where G is an Abelian group, while the function f: {0,1}^n -> G itself is kept secret from the parties. Ohsawa et al. in 2017 observed that any function f can be described as a linear combination of basis functions by regarding the function space as a vector space of dimension 2^n, and gave new FSS schemes based on the Fourier basis. All existing FSS schemes are of (p,p)-threshold type: that is, to compute f(x), one has to collect f_i(x) from all the distributed functions. In this paper, as in secret sharing schemes, we consider FSS schemes with a general access structure. To do this, we observe that the Fourier-based FSS schemes of Ohsawa et al. are compatible with linear secret sharing schemes. By incorporating the techniques of linear secret sharing with general access structures into the Fourier-based FSS schemes, we show Fourier-based FSS schemes with any general access structure. Comment: 12 pages.
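    A minimal sketch of the (p,p)-threshold correctness condition f(x) = f_1(x) + ... + f_p(x) over a group: the toy Python below additively shares a function's truth table over Z_Q. It is illustrative only; its shares are exponentially long, whereas genuine FSS (including the Fourier-based schemes discussed here) compresses them into short keys:

        import random

        Q = 2**61 - 1  # work in the Abelian group Z_Q

        def share_function(f, n: int, p: int, rng=random.Random(0)):
            """Toy (p,p)-threshold FSS: additively share the truth table
            of f: {0,1}^n -> Z_Q among p parties. Any p-1 shares look
            uniform, but each share is a full 2^n-entry table -- real FSS
            schemes compress shares into short keys."""
            tables = [[0] * (1 << n) for _ in range(p)]
            for x in range(1 << n):
                shares = [rng.randrange(Q) for _ in range(p - 1)]
                shares.append((f(x) - sum(shares)) % Q)
                for i in range(p):
                    tables[i][x] = shares[i]
            # party i's distributed function f_i
            return [lambda x, t=t: t[x] for t in tables]

        f = lambda x: 1 if x == 0b101 else 0   # a point function
        parts = share_function(f, n=3, p=4)
        print(sum(fi(0b101) for fi in parts) % Q)  # 1 == f(0b101)
        print(sum(fi(0b000) for fi in parts) % Q)  # 0 == f(0b000)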

    Limits of Extractability Assumptions with Distributional Auxiliary Input

    Extractability, or "knowledge," assumptions have recently gained popularity in the cryptographic community, leading to the study of primitives such as extractable one-way functions, extractable hash functions, succinct non-interactive arguments of knowledge (SNARKs), and (public-coin) differing-inputs obfuscation ((PC-)diO), and spurring the development of a wide spectrum of new applications relying on these primitives. For most of these applications, it is required that the extractability assumption holds even in the presence of attackers receiving some auxiliary information that is sampled from some fixed efficiently computable distribution Z. We show that, assuming the existence of public-coin collision-resistant hash functions, there exists an efficient distribution Z such that either
    - PC-diO for Turing machines does not exist, or
    - extractable one-way functions w.r.t. auxiliary input Z do not exist.
    A corollary of this result shows that, additionally assuming the existence of fully homomorphic encryption with decryption in NC1, there exists an efficient distribution Z such that either
    - SNARKs for NP w.r.t. auxiliary input Z do not exist, or
    - PC-diO for NC1 circuits does not exist.
    To achieve our results, we develop a "succinct punctured program" technique, mirroring the powerful punctured program technique of Sahai and Waters (STOC '14), and present several other applications of this new technique. In particular, we construct succinct perfect zero-knowledge SNARGs and give a universal instantiation of random oracles in full-domain hash applications, based on PC-diO. As a final contribution, we demonstrate that even in the absence of auxiliary input, care must be taken when making use of extractability assumptions. We show that (standard) diO w.r.t. any distribution D over programs and bounded-length auxiliary input is directly implied by any obfuscator that satisfies the weaker indistinguishability obfuscation (iO) security notion and diO for a slightly modified distribution D' of programs (of slightly greater size) and no auxiliary input. As a consequence, we directly obtain negative results for (standard) diO in the absence of auxiliary input.

    The Bottleneck Complexity of Secure Multiparty Computation

    In this work, we initiate the study of bottleneck complexity as a new communication efficiency measure for secure multiparty computation (MPC). Roughly, the bottleneck complexity of an MPC protocol is defined as the maximum communication complexity required by any party within the protocol execution. We observe that even without security, bottleneck communication complexity is an interesting measure of communication complexity for (distributed) functions and propose it as a fundamental area to explore. While achieving O(n) bottleneck complexity (where n is the number of parties) is straightforward, we show that: (1) achieving sublinear bottleneck complexity is not always possible, even when no security is required; (2) on the other hand, several useful classes of functions do have o(n) bottleneck complexity when no security is required. Our main positive result is a compiler that transforms any (possibly insecure) efficient protocol with a fixed communication pattern for computing any functionality into a secure MPC protocol, while preserving the bottleneck complexity of the underlying protocol (up to security-parameter overhead). Given our compiler, an efficient protocol for any function f with sublinear bottleneck complexity can be transformed into an MPC protocol for f with the same bottleneck complexity. Along the way, we build cryptographic primitives - incremental fully homomorphic encryption, succinct non-interactive arguments of knowledge with an ID-based simulation-extractability property, and verifiable protocol execution - that may be of independent interest.
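    The definition admits a one-line computation over a protocol trace. The Python sketch below (our illustration; the message traces are hypothetical) contrasts a star-shaped protocol, whose bottleneck grows with n, with a chain, whose bottleneck is O(1) even though both have the same total communication:

        from collections import defaultdict

        def bottleneck_complexity(messages):
            """Bottleneck complexity of a protocol trace: the maximum
            number of bits any single party sends or receives, per the
            definition above. `messages` is a list of tuples
            (sender, receiver, bits)."""
            load = defaultdict(int)
            for sender, receiver, bits in messages:
                load[sender] += bits
                load[receiver] += bits
            return max(load.values())

        # Star: party 0 talks to everyone -> bottleneck grows linearly,
        # even though total communication is the same as the chain's.
        star = [(0, i, 1) for i in range(1, 8)]
        # Chain: each party forwards one message -> constant bottleneck.
        chain = [(i, i + 1, 1) for i in range(7)]
        print(bottleneck_complexity(star))   # 7
        print(bottleneck_complexity(chain))  # 2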

    Cryptography from Information Loss

    Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is "lossy" reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into "useful" hardness, namely cryptography. Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information I(X; C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions, such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then proceed to show several consequences of lossy reductions:
    1. We say that a language L has an f-reduction to a language L' for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x1, ..., xm), with each xi ∈ {0,1}^n, and outputs a string z such that with high probability, L'(z) = f(L(x1), L(x2), ..., L(xm)). Suppose a language L has an f-reduction C to L' that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds:
    - f is the OR function, t ≤ m/100, and L' is the same as L;
    - f is the Majority function, and t ≤ m/100;
    - f is the OR function, t ≤ O(m log n), and the reduction has no error.
    This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which result only in auxiliary-input one-way functions.
    2. Our second result is about the stronger notion of t-compressing f-reductions - reductions that only output t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses).
    Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, that may be of independent interest.
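    For concreteness, the definition together with one standard information-theoretic consequence (the second step is our added observation, not quoted from the paper): a reduction whose output is at most t bits long is automatically t-lossy, since mutual information is bounded by the entropy of the output.

        \forall X:\quad I\big(X;\, C(X)\big) \le t,
        \qquad\text{and if } |C(X)| \le t \text{ bits, then }
        I\big(X;\, C(X)\big) \le H\big(C(X)\big) \le t .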

    Communication Lower Bounds for Cryptographic Broadcast Protocols

    Broadcast protocols enable a set of n parties to agree on the input of a designated sender, even facing attacks by malicious parties. In the honest-majority setting, randomization and cryptography were harnessed to achieve low-communication broadcast with sub-quadratic total communication and balanced sub-linear cost per party. However, comparatively little is known in the dishonest-majority setting. Here, the most communication-efficient constructions are based on Dolev and Strong (SICOMP '83), and sub-quadratic broadcast has not been achieved. On the other hand, the only nontrivial ω(n) communication lower bounds are restricted to deterministic protocols, or against strong adaptive adversaries that can perform "after the fact" removal of messages. We provide new communication lower bounds in this space, which hold against arbitrary cryptography and setup assumptions, as well as a simple protocol showing near tightness of our first bound.
    1) We demonstrate a tradeoff between resiliency and communication for protocols secure against n − o(n) static corruptions. For example, Ω(n·polylog(n)) messages are needed when the number of honest parties is n/polylog(n); Ω(n√n) messages are needed for O(√n) honest parties; and Ω(n²) messages are needed for O(1) honest parties. Complementarily, we demonstrate broadcast with O(n·polylog(n)) total communication facing any constant fraction of static corruptions.
    2) Our second bound considers n/2 + k corruptions and a weakly adaptive adversary that cannot remove messages "after the fact." We show that any broadcast protocol within this setting can be attacked to force an arbitrary party to send messages to k other parties. This rules out, for example, broadcast facing 51% corruptions in which all non-sender parties have sublinear communication locality.
    Comment: A preliminary version of this work appeared in DISC 2023.
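    The three data points in result 1 are consistent with a single tradeoff in the number h of honest parties (this reading is ours, not a claim quoted from the abstract):

        \Omega\!\Big(\frac{n^2}{h}\Big) \text{ messages:}\quad
        h = \frac{n}{\mathrm{polylog}(n)} \Rightarrow \Omega\big(n\,\mathrm{polylog}(n)\big),
        \quad h = O(\sqrt n) \Rightarrow \Omega\big(n\sqrt n\big),
        \quad h = O(1) \Rightarrow \Omega\big(n^2\big).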

    Foundations of Homomorphic Secret Sharing

    Homomorphic secret sharing (HSS) is the secret-sharing analogue of homomorphic encryption. An HSS scheme supports a local evaluation of functions on shares of one or more secret inputs, such that the resulting shares of the output are short. Some applications require the stronger notion of additive HSS, where the shares of the output add up to the output over some finite Abelian group. While some strong positive results for HSS are known under specific cryptographic assumptions, many natural questions remain open. We initiate a systematic study of HSS, making the following contributions.
    - A definitional framework. We present a general framework for defining HSS schemes that unifies and extends several previous notions from the literature, and we cast known results within this framework.
    - Limitations. We establish limitations on information-theoretic multi-input HSS with short output shares via a relation with communication complexity. We also show that additive HSS for non-trivial functions, even the AND of two input bits, implies non-interactive key exchange, and is therefore unlikely to be implied by public-key encryption or even oblivious transfer.
    - Applications. We present two types of applications of HSS. First, we construct 2-round protocols for secure multiparty computation from a simple constant-size instance of HSS. As a corollary, we obtain 2-round protocols with attractive asymptotic efficiency features under the Decisional Diffie-Hellman (DDH) assumption. Second, we use HSS to obtain nearly optimal worst-case to average-case reductions in P. This in turn has applications to fine-grained average-case hardness and verifiable computation.
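    For the simplest non-trivial case, public linear functions over Z_Q, plain additive secret sharing is itself an additive HSS: each party applies the linear map locally, and the output shares sum to the output over the group. The Python sketch below is a folklore illustration of the definition, not one of the paper's constructions (which target richer function classes under assumptions such as DDH):

        import random

        Q = 2**31 - 1  # shares and outputs live in the group Z_Q

        def share(x: int, parties: int, rng=random.Random(1)):
            """Additively secret-share x in Z_Q among `parties` parties."""
            s = [rng.randrange(Q) for _ in range(parties - 1)]
            s.append((x - sum(s)) % Q)
            return s

        def eval_linear(share_vec, coeffs):
            """Local HSS evaluation of the public linear map
            f(x_1,...,x_m) = sum_j c_j * x_j: each party applies f to
            its own shares. The results are additive shares of f's
            output, exactly as additive HSS demands."""
            return sum(c * s for c, s in zip(coeffs, share_vec)) % Q

        inputs, coeffs, p = [10, 20, 30], [1, 2, 3], 3
        shares = list(zip(*(share(x, p) for x in inputs)))  # per-party views
        outputs = [eval_linear(sv, coeffs) for sv in shares]
        print(sum(outputs) % Q)  # 1*10 + 2*20 + 3*30 = 140 = f(inputs)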
